8 research outputs found

    Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction

    Full text link
    The ultimate goal of many image-based modeling systems is to render photo-realistic novel views of a scene without visible artifacts. Existing evaluation metrics and benchmarks focus mainly on the geometric accuracy of the reconstructed model, which is, however, a poor predictor of visual accuracy. Furthermore, geometric accuracy alone does not allow evaluating systems that either lack a geometric scene representation or utilize coarse proxy geometry. Examples include light field or image-based rendering systems. We propose a unified evaluation approach based on novel view prediction error that is able to analyze the visual quality of any method that can render novel views from input images. One of the key advantages of this approach is that it does not require ground truth geometry. This dramatically simplifies the creation of test datasets and benchmarks. It also allows us to evaluate the quality of an unknown scene during the acquisition and reconstruction process, which is useful for acquisition planning. We evaluate our approach on a range of methods including standard geometry-plus-texture pipelines as well as image-based rendering techniques, compare it to existing geometry-based benchmarks, and demonstrate its utility for a range of use cases. Comment: 10 pages, 12 figures, paper was submitted to ACM Transactions on Graphics for review
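
    The evaluation loop the abstract describes can be sketched compactly: withhold a photograph from the system under test, render a novel view from that photograph's camera pose, and score the difference with an image metric plus a completeness measure. The sketch below is only a minimal illustration, assuming a hypothetical render_view callback supplied by whatever reconstruction or image-based rendering system is being evaluated, and using a plain masked RMSE rather than the paper's actual metrics.

```python
import numpy as np

def novel_view_prediction_error(render_view, held_out_photo, pose, intrinsics):
    """Score a reconstruction by how well it re-renders a held-out photograph.

    `render_view` is a hypothetical callback supplied by the system under test:
    given a camera pose and intrinsics it returns an RGB image and a boolean
    validity mask (True where the system produced a pixel). The metric below is
    a plain masked RMSE plus a completeness ratio, not the paper's metrics.
    """
    rendered, valid = render_view(pose, intrinsics)            # HxWx3 image, HxW mask
    if not valid.any():
        return float("inf"), 0.0                               # nothing was rendered
    diff = rendered.astype(np.float64) - held_out_photo.astype(np.float64)
    per_pixel = np.sqrt((diff ** 2).mean(axis=-1))             # RMS error over RGB
    error = float(per_pixel[valid].mean())                     # ignore unrendered pixels
    completeness = float(valid.mean())                         # fraction of covered pixels
    return error, completeness

def evaluate(render_view, held_out_photos, poses, intrinsics):
    """Evaluate a set of held-out views; lower error and higher completeness are better."""
    return [novel_view_prediction_error(render_view, img, pose, intrinsics)
            for img, pose in zip(held_out_photos, poses)]
```

    Note that the held-out photographs must not be used as input to the reconstruction itself; otherwise the comparison only measures how well the system reproduces its own inputs.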

    Texturing 3D Reconstructions from Multi-Scale High-Resolution Images

    No full text
    Texturing is typically the last step in the pipeline for 3D object reconstruction. In the last decade, image-based 3D reconstruction algorithms have become robust and efficient enough to reconstruct even large and unconstrained scenes; texturing algorithms, however, struggle with such datasets. This work introduces an algorithm that creates a high-quality texture for the meshes resulting from such datasets. Based on Lempitsky and Ivanov's work [19], we use graph cuts to simultaneously select an appropriate view for each face and minimize visible seams. Within the view selection, we account for the sharpness, resolution, and distance of each view as well as the viewing angle. To handle the remaining visible seams, we use a global seam-leveling procedure that adjusts the color values of the texture patches. Apart from this adjustment, we do not blend or resample the input images in any way, so the resulting texture patches have almost the same quality as the input images. The algorithm handles datasets with more than 500 high-resolution images and meshes with over 8 million triangles in a few hours.
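
    To make the view-selection step more concrete, the sketch below casts it as a labeling problem over mesh faces: a data term rewards views that see a face sharply, at high resolution, and at a good angle (supplied here as a hypothetical precomputed view_quality matrix), and a pairwise term penalizes adjacent faces textured from different views. The paper minimizes this energy with graph cuts; the greedy iterated-conditional-modes loop below is only a simplified stand-in for that solver, and the subsequent seam-leveling step is not shown.

```python
import numpy as np

def select_views(num_faces, num_views, view_quality, adjacency,
                 seam_penalty=1.0, iters=10):
    """Assign one source view (label) to every mesh face.

    view_quality: (num_faces, num_views) array; higher means the view sees the
        face more sharply, at higher resolution, and at a better angle; use
        -inf for face/view pairs that are not visible. (Hypothetical input.)
    adjacency: list of (face_i, face_j) pairs sharing a mesh edge.
    Energy: sum_f -view_quality[f, label[f]] + seam_penalty * #(label changes
    across adjacent faces). Minimized here by simple ICM for illustration.
    """
    labels = view_quality.argmax(axis=1)            # start from the best view per face
    neighbors = [[] for _ in range(num_faces)]
    for i, j in adjacency:
        neighbors[i].append(j)
        neighbors[j].append(i)

    for _ in range(iters):
        changed = False
        for f in range(num_faces):
            # Cost of each candidate label for face f, given its neighbors' labels.
            data_cost = -view_quality[f]                                  # (num_views,)
            smooth_cost = np.array([sum(lbl != labels[n] for n in neighbors[f])
                                    for lbl in range(num_views)], dtype=float)
            best = int(np.argmin(data_cost + seam_penalty * smooth_cost))
            if best != labels[f]:
                labels[f] = best
                changed = True
        if not changed:
            break                                   # local minimum reached
    return labels
```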

    UAV Capture Planning for City-Scale 3D Reconstructions

    No full text

    Let There Be Color! Large-Scale Texturing of 3D Reconstructions

    No full text
    3D reconstruction pipelines using structure-from-motion and multi-view stereo techniques are today able to reconstruct impressive, large-scale geometry models from images but do not yield textured results. Current texture creation methods are unable to handle the complexity and scale of these models. We therefore present the first comprehensive texturing framework for large-scale, real-world 3D reconstructions. Our method addresses most challenges occurring in such reconstructions: the large number of input images, their drastically varying properties such as image scale, (out-of-focus) blur, exposure variation, and occluders (e.g., moving plants or pedestrians). Using the proposed technique, we are able to texture datasets that are several orders of magnitude larger and far more challenging than shown in related work.
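
    As a small, self-contained illustration of the kind of per-image quality cues such a framework has to reason about, the snippet below scores photographs for blur and exposure with standard heuristics (variance of a discrete Laplacian, distance of mean luminance from mid-gray). These are generic heuristics chosen for illustration only, not the specific measures used in the paper.

```python
import numpy as np

def blur_score(gray):
    """Variance of a 5-point discrete Laplacian; low values indicate
    out-of-focus blur. `gray` is a 2-D float array with values in [0, 1]."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def exposure_score(gray):
    """1.0 when mean luminance is mid-gray (0.5), falling linearly towards 0
    for strongly under- or over-exposed images."""
    return float(1.0 - 2.0 * abs(gray.mean() - 0.5))

def image_quality(gray, blur_weight=1.0, exposure_weight=1.0):
    """Combined heuristic quality score; higher is better. Illustrative only."""
    return blur_weight * blur_score(gray) + exposure_weight * exposure_score(gray)
```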

    MVE—An image-based reconstruction environment

    No full text